
    ERP analysis of cognitive sequencing: a left-anterior negativity related to structural transformation processing

    A major objective of cognitive neuroscience is to identify those neurocomputational processes that may be shared by multiple cognitive functions vs. those that are highly specific. This problem of identifying general vs. specialized functions is of particular interest in the domain of language processing. Within this domain, event-related brain potential (ERP) studies have demonstrated a left anterior negativity (LAN) in the range of 300 to 700 ms, associated with syntactic processing and often linked to grammatical function words. These words have little or no semantic content, but rather play a role in encoding the syntactic structure required for parsing. In the current study we test the hypothesis that the LAN reflects the operation of a more general sequence processing capability in which special symbols encode structural information that, when combined with past elements in the sequence, allows the prediction of successor elements. We recorded ERPs during a non-linguistic sequencing task that required subjects (n = 10) to process special symbols possessing the functional property defined above. When compared to ERPs in a control condition, function symbol processing elicited a left anterior negative shift with temporal and spatial characteristics quite similar to the LAN described during function word processing in language, supporting our hypothesis. These results are discussed in the context of related studies of syntactic and cognitive sequence processing.
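    The abstract does not give the actual stimuli or rules of the sequencing task, so the following is only a hypothetical sketch of the kind of rule it describes: a "function symbol" carries no content of its own but specifies a structural transformation of the preceding elements, which predicts the successor elements.

# Hypothetical illustration of a function-symbol sequencing rule; the symbols
# '*' and '#' and their transformations are invented for this example.
def predict_successors(sequence):
    """Given a prefix ending in a function symbol, predict the next elements."""
    transformations = {
        "*": lambda pair: pair,        # A B * -> A B   (repeat previous pair)
        "#": lambda pair: pair[::-1],  # A B # -> B A   (repeat pair reversed)
    }
    symbol = sequence[-1]
    previous_pair = sequence[-3:-1]
    return transformations[symbol](previous_pair)

print(predict_successors(["A", "B", "*"]))  # ['A', 'B']
print(predict_successors(["A", "B", "#"]))  # ['B', 'A']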

    Developmental Stages of Perception and Language Acquisition in a Perceptually Grounded Robot

    The objective of this research is to develop a system for language learning that is based on a minimum of pre-wired language-specific functionality and is compatible with observations of perceptual and language capabilities in the human developmental trajectory. In the proposed system, meaning (in terms of descriptions of events and spatial relations) is extracted from video images based on detection of position, motion, physical contact and their parameters. Mapping of sentence form to meaning is performed by learning grammatical constructions that are retrieved from a construction inventory based on the constellation of closed-class items uniquely identifying the target sentence structure. The resulting system displays robust acquisition behavior that reproduces certain observations from developmental studies, with very modest “innate” language specificity.
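    A minimal sketch of the construction-inventory idea described above, assuming invented example sentences and a simple (predicate, agent, object) meaning frame; in the actual system the form-to-meaning mappings are learned from video-grounded events rather than hard-coded.

# Constructions are indexed by the constellation of closed-class items; open-class
# words are then mapped to meaning roles in the order the construction specifies.
CLOSED_CLASS = {"the", "to", "was", "by"}

def constellation(sentence):
    """Index a sentence by its closed-class items and their positions."""
    words = sentence.lower().split()
    return tuple((i, w) for i, w in enumerate(words) if w in CLOSED_CLASS)

# Hypothetical learned inventory: constellation -> role ordering of open-class words.
construction_inventory = {
    constellation("the block pushed the ball"): ("agent", "predicate", "object"),
    constellation("the ball was pushed by the block"): ("object", "predicate", "agent"),
}

def sentence_to_meaning(sentence):
    words = [w for w in sentence.lower().split() if w not in CLOSED_CLASS]
    roles = construction_inventory[constellation(sentence)]
    return dict(zip(roles, words))

print(sentence_to_meaning("the ball was pushed by the block"))
# {'object': 'ball', 'predicate': 'pushed', 'agent': 'block'}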

    Spoken Language and Vision for Adaptive Human-Robot Cooperation


    Improving Quality of Life with a Narrative Robot Companion: II - Creating Group Cohesion via Shared Narrative Experience

    One of the most difficult things for social robots is to enter the realm of human social relations. Here, we exploit recent advances in natural language processing (NLP) that provide robots access to human experience, which can allow them to enter into human social relations. When people are put together in arbitrary conditions, as in a home for the elderly, it can be difficult for them to share experience. To address this, we use recent advances in NLP to allow the robot to discover shared narratives between such group members. Our narrative companion extends the human capability to make social narrative links for building group coherence through sharing experience. Requirements are identified for a narrative companion to allow individuals within a group to focus their interactions on shared experiences and interests, to improve group coherence. The system should collect and organize members' experiences, and should discover semantic similarity between different members' experiences in order to create a group narrative. It should then accompany the group in their cohesion-enhancing experience of this narrative. Based on these requirements, and extending our previous work, we implement the V2.0 narrative companion on the Pepper robot. The system is validated in a case study where participants provide 5 favorite photographs and short answers to questions. The Narrative Semantic Similarity Analysis System (NarSim) generates a meaningful trajectory of narrative linking people and events depicted in the photographs. With this, Pepper then accompanies the group and prompts group members to enrich the shared narrative, to further enhance group pleasure and cohesion. Results are presented, and future applications for improved quality of life are discussed.
    T. Uchida, H. Ishiguro and P. F. Dominey, "Improving Quality of Life with a Narrative Robot Companion: II – Creating Group Cohesion via Shared Narrative Experience," 2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), Naples, Italy, 31 Aug - 04 Sept 2020, pp. 906-913, doi: 10.1109/RO-MAN47096.2020.9223600.
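    The abstract does not specify which NLP model NarSim uses, so the following sketch simply assumes off-the-shelf sentence embeddings (sentence-transformers) as one plausible way to link different members' experience descriptions into a shared narrative trajectory; the example descriptions are invented.

from sentence_transformers import SentenceTransformer, util

descriptions = {
    "Alice": "Hiking with my grandchildren in the Alps last summer.",
    "Bob": "A mountain walk with my late wife near Chamonix.",
    "Carol": "My dog playing on the beach.",
}

model = SentenceTransformer("all-MiniLM-L6-v2")
names = list(descriptions)
embeddings = model.encode([descriptions[n] for n in names], convert_to_tensor=True)
similarity = util.cos_sim(embeddings, embeddings)

# Greedy trajectory: always move to the most similar unvisited member, so
# consecutive stories share themes the robot can prompt the group about.
trajectory = [0]
while len(trajectory) < len(names):
    last = trajectory[-1]
    remaining = [i for i in range(len(names)) if i not in trajectory]
    trajectory.append(max(remaining, key=lambda i: float(similarity[last][i])))

print(" -> ".join(names[i] for i in trajectory))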

    I Reach Faster When I See You Look: Gaze Effects in Human–Human and Human–Robot Face-to-Face Cooperation

    Human–human interaction in natural environments relies on a variety of perceptual cues. Humanoid robots are becoming increasingly refined in their sensorimotor capabilities, and thus should now be able to manipulate and exploit these social cues in cooperation with their human partners. Previous studies have demonstrated that people follow human and robot gaze, and that it can help them to cope with spatially ambiguous language. Our goal is to extend these findings into the domain of action, to determine how human and robot gaze can influence the speed and accuracy of human action. We report on results from a human–human cooperation experiment demonstrating that an agent’s vision of her/his partner’s gaze can significantly improve that agent’s performance in a cooperative task. We then implement a heuristic capability to generate such gaze cues by a humanoid robot that engages in the same cooperative interaction. The subsequent human–robot experiments demonstrate that a human agent can indeed exploit the predictive gaze of their robot partner in a cooperative task. This allows us to render the humanoid robot more human-like in its ability to communicate with humans. The long-term objectives of the work are thus to identify social cooperation cues, and to validate their pertinence through implementation in a cooperative robot. The current research provides the robot with the capability to produce appropriate speech and gaze cues in the context of human–robot cooperation tasks. Gaze is manipulated in three conditions: full gaze (coordinated eye and head), eyes hidden with sunglasses, and head fixed. We demonstrate the pertinence of these cues in terms of statistical measures of action times for humans in the context of a cooperative task, as gaze significantly facilitates cooperation as measured by human response times.
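    A minimal sketch of the proactive-gaze heuristic described above: before acting on a target, the robot orients its gaze toward it so the human partner can anticipate the action. The lead time and the look_at/reach_to helpers are hypothetical stand-ins for the robot's actual motor interface, not the paper's implementation.

import time

GAZE_LEAD_TIME_S = 0.8  # assumed lead time between gaze cue and reach

def look_at(target, condition="full_gaze"):
    if condition == "full_gaze":
        print(f"[robot] eyes + head orient toward {target}")
    elif condition == "head_fixed":
        print(f"[robot] eyes only orient toward {target} (head fixed)")
    # 'sunglasses' condition: gaze is produced but hidden from the partner

def reach_to(target):
    print(f"[robot] reaching toward {target}")

def cued_reach(target, condition="full_gaze"):
    look_at(target, condition)    # gaze cue precedes the reach...
    time.sleep(GAZE_LEAD_TIME_S)  # ...so the partner can predict the target
    reach_to(target)

cued_reach("green block")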

    Large Language Models as Superpositions of Cultural Perspectives

    Large Language Models (LLMs) are often misleadingly recognized as having a personality or a set of values. We argue that an LLM can be seen as a superposition of perspectives with different values and personality traits. LLMs exhibit context-dependent values and personality traits that change based on the induced perspective (as opposed to humans, who tend to have more coherent values and personality traits across contexts). We introduce the concept of perspective controllability, which refers to a model's affordance to adopt various perspectives with differing values and personality traits. In our experiments, we use questionnaires from psychology (PVQ, VSM, IPIP) to study how exhibited values and personality traits change based on different perspectives. Through qualitative experiments, we show that LLMs express different values when those are (implicitly or explicitly) implied in the prompt, and that LLMs express different values even when those are not obviously implied (demonstrating their context-dependent nature). We then conduct quantitative experiments to study the controllability of different models (GPT-4, GPT-3.5, OpenAssistant, StableVicuna, StableLM), the effectiveness of various methods for inducing perspectives, and the smoothness of the models' drivability. We conclude by examining the broader implications of our work and outline a variety of associated scientific questions. The project website is available at https://sites.google.com/view/llm-superpositions .
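    A minimal sketch of how perspective controllability could be probed, under stated assumptions: query_llm is a placeholder for any chat-model API call, the example item is an illustrative Likert question rather than an actual PVQ/VSM/IPIP item, and the answer parsing is deliberately naive.

import re

def query_llm(system_prompt: str, user_prompt: str) -> str:
    """Stand-in for a real chat-completion call (e.g., GPT-4 via an API client)."""
    raise NotImplementedError("plug in your model client here")

def administer_item(perspective: str, item: str) -> int:
    system = f"Answer the questionnaire from the following perspective: {perspective}."
    user = f"{item}\nAnswer with a single number from 1 (disagree) to 5 (agree)."
    reply = query_llm(system, user)
    match = re.search(r"[1-5]", reply)
    return int(match.group()) if match else None

def controllability(item: str, perspectives: list[str]) -> int:
    """Spread of scores across induced perspectives: larger = more controllable."""
    scores = [administer_item(p, item) for p in perspectives]
    return max(scores) - min(scores)

# Example usage (requires a real query_llm implementation):
# controllability("Tradition and customs are important to me.",
#                 ["a conservative elder", "a rebellious teenager"])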

    Towards a Platform-Independent Cooperative Human Robot Interaction System: III. An Architecture for Learning and Executing Actions and Shared Plans

    Robots should be capable of interacting in a cooperative and adaptive manner with their human counterparts in open-ended tasks that can change in real time. An important aspect of the robot behavior will be the ability to acquire new knowledge of the cooperative tasks by observing and interacting with humans. The current research addresses this challenge. We present results from a cooperative human-robot interaction system that has been specifically developed for portability between different humanoid platforms, through abstraction layers at the perceptual and motor interfaces. In the perceptual domain, the resulting system is demonstrated to learn to recognize objects and to recognize actions as sequences of perceptual primitives, and to transfer this learning, and recognition, between different robotic platforms. For execution, composite actions and plans are shown to be learnt on one robot and executed successfully on a different one. Most importantly, the system provides the ability to link actions into shared plans that form the basis of human-robot cooperation, applying principles from human cognitive development to the domain of robot cognitive systems. © 2009-2011 IEEE
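    A minimal sketch of the platform-abstraction idea described above: shared plans are stored as sequences of named actions over an abstract motor interface, so a plan learned with one robot backend can be executed on another. Class and method names here are illustrative, not the paper's actual architecture.

from abc import ABC, abstractmethod

class MotorInterface(ABC):
    """Abstraction layer: each humanoid platform provides its own implementation."""
    @abstractmethod
    def execute_primitive(self, primitive: str, target: str) -> None: ...

class ICubBackend(MotorInterface):
    def execute_primitive(self, primitive, target):
        print(f"[iCub] {primitive}({target})")

class NaoBackend(MotorInterface):
    def execute_primitive(self, primitive, target):
        print(f"[Nao] {primitive}({target})")

class SharedPlan:
    """A cooperative plan: an ordered list of (agent, primitive, target) steps."""
    def __init__(self, steps):
        self.steps = steps

    def execute(self, robot: MotorInterface, robot_agent="robot"):
        for agent, primitive, target in self.steps:
            if agent == robot_agent:
                robot.execute_primitive(primitive, target)
            else:
                print(f"[waiting] for {agent} to {primitive} {target}")

# A plan acquired by observing the interaction can run on either backend.
plan = SharedPlan([("human", "lift", "box"), ("robot", "grasp", "toy"),
                   ("robot", "place", "toy")])
plan.execute(ICubBackend())
plan.execute(NaoBackend())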
